
Average American


Whose Preferences? Differences in Fairness Preferences and Their Impact on the Fairness of AI Utilizing Human Feedback

Lerner, Emilia Agis, Dorner, Florian E., Ash, Elliott, Goel, Naman

arXiv.org Artificial Intelligence

There is a growing body of work on learning from human feedback to align various aspects of machine learning systems with human values and preferences. We consider the setting of fairness in content moderation, in which human feedback is used to determine how two comments -- referencing different sensitive attribute groups -- should be treated in comparison to one another. With a novel dataset collected from Prolific and MTurk, we find significant gaps in fairness preferences depending on the race, age, political stance, educational level, and LGBTQ+ identity of annotators. We also demonstrate that demographics mentioned in text have a strong influence on how users perceive individual fairness in moderation. Further, we find that differences also exist in downstream classifiers trained to predict human preferences. Finally, we observe that an ensemble giving equal weight to classifiers trained on annotations from different demographics performs better across demographic intersections than a single classifier that gives equal weight to each annotation.
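The ensemble idea in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `train_fn` factory, the `(features, label, group)` data layout, and the `predict_proba` interface are all assumptions for the sake of the example.

```python
from collections import defaultdict

def train_per_group(train_fn, annotations):
    """Train one classifier per annotator demographic group.

    annotations: iterable of (features, label, group) triples.
    train_fn: callable taking a list of (features, label) pairs and
    returning a fitted model with a predict_proba(x) -> float method.
    """
    by_group = defaultdict(list)
    for x, y, group in annotations:
        by_group[group].append((x, y))
    return {g: train_fn(pairs) for g, pairs in by_group.items()}

def ensemble_predict(models, x):
    """Equal-weight average over the per-group classifiers, so each
    demographic group counts once regardless of how many annotations
    it contributed -- unlike a single classifier trained on the pooled
    data, which implicitly weights groups by annotation volume."""
    scores = [m.predict_proba(x) for m in models.values()]
    return sum(scores) / len(scores)
```

The design choice being contrasted is where the equal weighting happens: per annotation (one pooled model) versus per group (one model per group, averaged at prediction time).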


AI is 'intimidating,' 'dangerous': Members of Congress reveal how much they know about artificial intelligence

FOX News

WASHINGTON, D.C. – Calls to regulate artificial intelligence are growing on Capitol Hill following a dire warning from tech giants. But many lawmakers also admit they don't know much more about the technology than the average American. "I've had ChatGPT demonstrated to me by a friend, and its capabilities are kind of intimidating," Sen. Cynthia Lummis told Fox News.


Assessing Cross-Cultural Alignment between ChatGPT and Human Societies: An Empirical Study

Cao, Yong, Zhou, Li, Lee, Seolhwa, Cabello, Laura, Chen, Min, Hershcovich, Daniel

arXiv.org Artificial Intelligence

The recent release of ChatGPT has garnered widespread recognition for its exceptional ability to generate human-like responses in dialogue. Given its usage by users from various nations and its training on a vast multilingual corpus that incorporates diverse cultural and societal norms, it is crucial to evaluate its effectiveness in cultural adaptation. In this paper, we investigate the underlying cultural background of ChatGPT by analyzing its responses to questions designed to quantify human cultural differences. Our findings suggest that, when prompted with American context, ChatGPT exhibits a strong alignment with American culture, but it adapts less effectively to other cultural contexts. Furthermore, by using different prompts to probe the model, we show that English prompts reduce the variance in model responses, flattening out cultural differences and biasing them towards American culture. This study provides valuable insights into the cultural implications of ChatGPT and highlights the necessity of greater diversity and cultural awareness in language technologies.
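The variance effect described above -- English prompts flattening out cultural differences -- can be made concrete with a small sketch. The numbers below are made-up illustrative scores on a 1–5 survey scale, not data from the study.

```python
from statistics import pvariance

def cultural_spread(responses_by_prompt_lang):
    """For each prompt language, compute the variance of a model's
    survey-style answers across cultural contexts. Lower variance
    means the answers cluster together, i.e. cultural differences
    are flattened out."""
    return {lang: pvariance(scores)
            for lang, scores in responses_by_prompt_lang.items()}

# Hypothetical answers for the same survey item posed in five
# cultural contexts, once via English prompts and once via prompts
# in each context's own language.
example = {
    "english": [4, 4, 4, 3, 4],  # answers cluster: differences flattened
    "native":  [5, 2, 4, 1, 3],  # answers spread: differences preserved
}
```

Comparing the two variances is one simple way to quantify the flattening the abstract reports.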


Moving from Red AI to Green AI, Part 1: How to Save the Environment and Reduce Your Hardware Costs

#artificialintelligence

Machine learning, and especially deep learning, has become increasingly accurate in the past few years. This has improved our lives in ways we couldn't imagine just a few years ago, but we're far from the end of this AI revolution. Cars are driving themselves, x-ray photos are being analyzed automatically, and in this pandemic age, machine learning is being used to predict outbreaks of the disease, help with diagnosis, and make other critical healthcare decisions. And for those of us who are sheltering at home, recommendation engines in video-on-demand platforms help us forget our troubles for an hour or two. This increase in accuracy is important to make AI applications good enough for production, but it has come with an explosion in the size of these models.


How AI Is Reinventing the Relationship Between Banks, Credit Cards, and Consumers

#artificialintelligence

Getting a credit card has never been easier. Yet the rise in issued cards exacerbates the ever-present challenges that face issuers, banks, and above all else, consumers. User loyalty is at an all-time low. Banks and issuers struggle to retain existing cardholders due to the plethora of rewards programs incentivizing customers to apply for new cards. At the same time, consumers are having a difficult time paying off their balances, driving card debt and delinquencies to record levels.


Artificial intelligence is now smarter than the average American, researchers reveal

#artificialintelligence

COMPUTERS can already hold a massive amount of instantly retrievable data in a manner that puts most humans to shame, but getting them to actually display intelligence is an entirely different challenge. Now a team of researchers from Northwestern University has made a huge stride toward that goal with a computational model that outperforms the average American adult on a standard intelligence test. As PhysOrg reports, the computer system utilizes an AI platform called CogSketch that gives it the power to solve visual problems just by looking at them, something that has traditionally held back many examples of artificial intelligence. Being able to visually understand, interpret, and then use that data to come to a solution brings the computer system closer to the functioning of the human brain than many before it, so the team pitted its creation against a popular standardized test called Raven's Progressive Matrices. The Raven's test (or RPM for short) is composed of 60 multiple-choice questions that measure the taker's ability to reason using visual puzzles.